64 research outputs found

    Translating clinical training into practice in complex mental health systems: Toward opening the 'Black Box' of implementation

    Background: Implementing clinical training in a complex health care system is challenging. This report describes two successive training programs in one Veterans Affairs healthcare network and the lessons we drew from their successes and failures. The first training experience led us to appreciate the value of careful implementation planning, while the second suggested that use of an external facilitator might be an especially effective implementation component. We also describe a third training intervention in which we expect to test more rigorously our hypothesis regarding the value of external facilitation. Results: Our experiences appear to be consonant with the implementation model proposed by Fixsen. In this paper we offer a modified version of the Fixsen model with separate components related to training and implementation. Conclusion: This report further reinforces what others have noted, namely that educational interventions intended to change clinical practice should employ a multilevel approach if patients are to truly benefit from new skills gained by clinicians. We utilize an implementation research model to illustrate how the aims of the second intervention were realized and sustained over the 12-month follow-up period, and to suggest directions for future implementation research. The present report attests to the validity of, and contributes to, the emerging literature on implementation research.

    Is the involvement of opinion leaders in the implementation of research findings a feasible strategy?

    BACKGROUND: There is only limited empirical evidence about the effectiveness of opinion leaders as health care change agents. AIM: To test the feasibility of identifying opinion leaders, and to describe their characteristics, using a sociometric instrument and a self-designating instrument in different professional groups within the UK National Health Service. DESIGN: Postal questionnaire survey. SETTING AND PARTICIPANTS: All general practitioners, practice nurses and practice managers in two regions of Scotland. All physicians and surgeons (junior hospital doctors and consultants) and medical and surgical nursing staff in two district general hospitals and one teaching hospital in Scotland, as well as all Scottish obstetric and gynaecology, and oncology consultants. RESULTS: Using the sociometric instrument, the extent of social networks and potential coverage of the study population in primary and secondary care was highly idiosyncratic. In contrast, relatively complex networks with good coverage rates were observed in both national specialty groups. Identified opinion leaders were more likely to have the expected characteristics of opinion leaders identified from diffusion and social influence theories. Moreover, opinion leaders appeared to be condition-specific. The self-designating instrument identified more opinion leaders, but it was not possible to estimate the extent and structure of social networks or likely coverage by opinion leaders. There was poor agreement between the responses to the sociometric and self-designating instruments. CONCLUSION: The feasibility of identifying opinion leaders using an off-the-shelf sociometric instrument is variable across different professional groups and settings within the NHS. Whilst it is possible to identify opinion leaders using a self-designating instrument, the effectiveness of such opinion leaders has not been rigorously tested in health care settings. Opinion leaders appear to be monomorphic (different leaders for different issues). Recruitment of opinion leaders is unlikely to be an effective general strategy across all settings and professional groups; the more specialised the group, the more useful an opinion leader strategy may be.
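
    As a rough illustration of how responses to a sociometric instrument can be analysed, the sketch below counts advice nominations and flags the most-nominated clinicians as candidate opinion leaders. The names, nominations, and the 40% nomination threshold are invented assumptions for illustration only, not data or cut-offs from the study.

    # Hypothetical sketch: identify candidate opinion leaders from sociometric
    # nominations ("Whom would you ask for advice about condition X?").
    from collections import Counter

    # Invented (nominator, nominee) pairs for one clinical condition.
    nominations = [
        ("gp_01", "gp_07"), ("gp_02", "gp_07"), ("gp_03", "gp_07"),
        ("gp_04", "gp_09"), ("gp_05", "gp_09"), ("gp_06", "gp_02"),
    ]

    # In-degree in the advice network: how often each clinician is named.
    in_degree = Counter(nominee for _, nominee in nominations)
    respondents = {nominator for nominator, _ in nominations}

    # Arbitrary assumption: treat clinicians named by at least 40% of
    # respondents as candidate opinion leaders for this condition.
    threshold = 0.4 * len(respondents)
    opinion_leaders = sorted(name for name, count in in_degree.items() if count >= threshold)
    print(opinion_leaders)  # -> ['gp_07'] with the placeholder data above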

    Evaluating the successful implementation of evidence into practice using the PARiHS framework: theoretical and practical challenges

    Background: The PARiHS framework (Promoting Action on Research Implementation in Health Services) has proved to be a useful practical and conceptual heuristic for many researchers and practitioners in framing their research or knowledge translation endeavours. However, as a conceptual framework it remains untested, and therefore its contribution to the overall development and testing of theory in the field of implementation science is largely unquantified. Discussion: This being the case, the paper first provides an integrated summary of our conceptual and theoretical thinking so far and introduces a typology (derived from social policy analysis) used to distinguish between the terms conceptual framework, theory and model – important definitional and conceptual issues in trying to refine theoretical and methodological approaches to knowledge translation. Secondly, the paper describes the next phase of our work, in particular concentrating on the conceptual thinking and mapping that has led to the generation of the hypothesis that the PARiHS framework is best utilised as a two-stage process: first as a preliminary (diagnostic and evaluative) measure of the elements and sub-elements of evidence (E) and context (C), and then using the aggregated data from these measures to determine the most appropriate facilitation method. The exact nature of the intervention is thus determined by the specific actors in the specific context at a specific time and place. In the process of refining this next phase of our work, we have had to consider the wider issues around the use of theories to inform and shape our research activity; the ongoing challenges of developing robust and sensitive measures; facilitation as an intervention for getting research into practice; and, finally, how the current debates around evidence into practice are adopting wider notions that fit innovations more generally. Summary: The paper concludes by suggesting that the future direction of the work on the PARiHS framework is to develop a two-stage diagnostic and evaluative approach, where the intervention is shaped and moulded by the information gathered about the specific situation and from participating stakeholders. In order to expedite the generation of new evidence and testing of emerging theories, we suggest the formation of an international research implementation science collaborative that can systematically collect and analyse experiences of using and testing the PARiHS framework and similar conceptual and theoretical approaches. We also recommend further refinement of the definitions around conceptual framework, theory, and model, suggesting a wider discussion that embraces multiple epistemological and ontological perspectives.
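
    To make the proposed two-stage process concrete, here is a minimal, hypothetical sketch: evidence (E) and context (C) sub-elements are rated, the ratings are aggregated, and the combined profile is mapped to a facilitation approach. The sub-element names follow the PARiHS framework, but the 1-5 ratings, the cut-off, and the decision rules are illustrative assumptions, not a validated measure.

    # Hypothetical two-stage PARiHS sketch: diagnose E and C, then choose facilitation.
    from statistics import mean

    # Stage one: rate the sub-elements of evidence and context (1 = low, 5 = high).
    evidence = {"research": 4, "clinical_experience": 3, "patient_experience": 2, "local_data": 3}
    context = {"culture": 2, "leadership": 2, "evaluation": 3}

    def profile(scores: dict, cutoff: float = 3.0) -> str:
        """Classify an element as 'high' or 'low' from its sub-element ratings."""
        return "high" if mean(scores.values()) >= cutoff else "low"

    e_level, c_level = profile(evidence), profile(context)

    # Stage two (illustrative decision rule): weaker profiles call for more
    # intensive, enabling facilitation; stronger profiles need a lighter touch.
    if e_level == "high" and c_level == "high":
        approach = "light-touch, task-focused facilitation"
    elif c_level == "low":
        approach = "intensive, enabling facilitation focused on context"
    else:
        approach = "facilitation focused on strengthening the evidence base"

    print(e_level, c_level, "->", approach)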

    Applying the quality improvement collaborative method to process redesign: a multiple case study

    Background: Despite the widespread use of quality improvement collaboratives (QICs), evidence underlying this method is limited. A QIC is a method for testing and implementing evidence-based changes quickly across organisations. To extend the knowledge about conditions under which QICs can be used, we explored in this study the applicability of the QIC method for process redesign. Methods: We evaluated a Dutch process redesign collaborative of seventeen project teams using a multiple case study design. The goals of this collaborative were to reduce the time between the first visit to the outpatient clinic and the start of treatment, and to reduce the in-hospital length of stay by 30% for the involved patient groups. Data were gathered using qualitative methods, such as document analysis, questionnaires, semi-structured interviews and participation in collaborative meetings. Results: Application of the QIC method to process redesign proved to be difficult. First, project teams did not use the provided standard change ideas because of their need for customised solutions that fitted context-specific causes of waiting times and delays. Second, project teams were not able to test change ideas within short time frames due to the need for tailoring change ideas and the complexity of aligning interests of involved departments; small volumes of involved patient groups; and inadequate information and communication technology (ICT) support. Third, project teams did not experience peer stimulus because they saw few similarities between their projects, rarely shared experiences, and did not demonstrate competitive behaviour. In addition, a number of project teams reported that organisational and external change agent support was limited. Conclusions: This study showed that the perceived need for tailoring standard change ideas to local contexts and the complexity of aligning interests of involved departments hampered the use of the QIC method for process redesign. We cannot determine whether the QIC method would have been appropriate for process redesign. Peer stimulus was suboptimal as a result of the selection process for participation of project teams by the external change agent. In conclusion, project teams felt that necessary preconditions for successful use of the QIC method were lacking.

    Factors influencing success in quality-improvement collaboratives: development and psychometric testing of an instrument

    BACKGROUND: To increase the effectiveness of quality-improvement collaboratives (QICs), it is important to explore factors that potentially influence their outcomes. For this purpose, we have developed and tested the psychometric properties of an instrument that aims to identify the features that may enhance the quality and impact of collaborative quality-improvement approaches. The instrument can be used as a measurement instrument to retrospectively collect information about perceived determinants of success. In addition, it can be prospectively applied as a checklist to guide initiators, facilitators, and participants of QICs, with information about how to perform or participate in a collaborative with theoretically optimal chances of success. Such information can be used to improve collaboratives. METHODS: We developed an instrument with content validity based on the literature and the opinions of QIC experts. We collected data from 144 healthcare professionals in 44 multidisciplinary improvement teams participating in two QICs and used exploratory factor analysis to assess the construct validity. We used Cronbach's alpha to ascertain the internal consistency. RESULTS: The 50-item instrument we developed reflected expert-opinion-based determinants of success in a QIC. We deleted nine items after item reduction. On the basis of the factor analysis results, one further item was dropped, which resulted in a 40-item questionnaire. Exploratory factor analysis showed that a three-factor model provided the best fit. The components were labeled 'sufficient expert team support', 'effective multidisciplinary teamwork', and 'helpful collaborative processes'. Internal consistency reliability was excellent (alphas between .85 and .89). CONCLUSIONS: This newly developed instrument seems a promising tool for providing healthcare workers and policy makers with useful information about determinants of success in QICs. The psychometric properties of the instrument are satisfactory and warrant application either as an objective measure or as a checklist.
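
    As a small illustration of the internal-consistency step reported above, the sketch below computes Cronbach's alpha from a respondents-by-items score matrix. The simulated responses and item count are placeholders, not the 40-item QIC instrument data.

    # Minimal sketch: Cronbach's alpha on simulated questionnaire responses.
    import numpy as np

    def cronbach_alpha(items: np.ndarray) -> float:
        """items: respondents x items matrix of scores."""
        k = items.shape[1]
        item_variances = items.var(axis=0, ddof=1)
        total_variance = items.sum(axis=1).var(ddof=1)
        return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

    rng = np.random.default_rng(0)
    # 144 hypothetical respondents answering 14 items on a 5-point scale.
    # Random, uncorrelated answers will give a low alpha; real items loading
    # on one factor would score much higher.
    scores = rng.integers(1, 6, size=(144, 14)).astype(float)
    print(round(cronbach_alpha(scores), 2))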

    Quantitative data management in quality improvement collaboratives

    Background: Collaborative approaches in quality improvement have been promoted since the introduction of the Breakthrough method. The effectiveness of this method is inconclusive and further independent evaluation of the method has been called for. For any evaluation to succeed, data collection on interventions performed within the collaborative and the outcomes of those interventions is crucial. Getting enough data from quality improvement collaboratives (QICs) for evaluation purposes, however, has proved to be difficult. This paper provides a retrospective analysis of the process of data management in a Dutch quality improvement collaborative. From this analysis, general failure and success factors are identified. Discussion: This paper discusses complications and dilemmas observed in the set-up of data management for QICs. An overview is presented of signals that were picked up by the data management team. These signals were used to improve the strategies for data management during the program and have, as far as possible, been translated into practical solutions that have been successfully implemented. The recommendations from this study are as follows. First, it is clear from our experience that quality improvement programs deviate from experimental research in many ways. It is not only impossible, but also undesirable, to control processes and standardize data streams. QICs need to steer clear of data protocols that do not allow for change. It is therefore important that, at a minimum, when quantitative results are gathered, these results are accompanied by qualitative results that can be used to interpret them correctly. Second, monitoring and data acquisition interfere with routine, which makes a database collecting data in a QIC an intervention in itself. It is very important to be aware of this when reporting the results. Using existing databases where possible can overcome some of these problems, but this is often not possible given the change objective of QICs. Third, introducing a standardized spreadsheet to the teams is a very practical and helpful tool for collecting standardized data within a QIC. It is vital that the spreadsheets are handed out before baseline measurements start.
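
    As a practical illustration of the standardized spreadsheet recommendation, the sketch below builds a uniform data-collection template with pandas and writes it to a file for distribution to teams before baseline measurement. The column names and the example indicator are illustrative assumptions, not the template used in this collaborative.

    # Hypothetical sketch: a standardized data-collection template for QIC teams.
    import pandas as pd

    columns = ["team_id", "measurement_date", "indicator", "numerator", "denominator", "comments"]
    template = pd.DataFrame(columns=columns)

    # Example baseline row a team might enter before the collaborative starts.
    template.loc[0] = ["team_01", "2024-01-15", "patients_screened", 12, 40, "baseline measurement"]

    # Distribute the same file to every team so incoming data share one structure.
    template.to_csv("qic_data_template.csv", index=False)
    print(template)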

    Barriers to implementation of a computerized decision support system for depression: an observational report on lessons learned in "real world" clinical settings

    Background: Despite wide promotion, clinical practice guidelines have had limited effect in changing physician behavior. Effective implementation strategies to date have included: multifaceted interventions involving audit and feedback, local consensus processes, and marketing; reminder systems, either manual or computerized; and interactive educational meetings. In addition, there is now growing evidence that contextual factors affecting implementation must be addressed, such as organizational support (leadership procedures and resources) for the change and strategies to implement and maintain new systems. Methods: To examine the feasibility and effectiveness of implementation of a computerized decision support system for depression (CDSS-D) in routine public mental health care in Texas, fifteen study clinicians (thirteen physicians and two advanced nurse practitioners) participated across five sites, accruing over 300 outpatient visits for 168 patients. Results: Issues regarding computer literacy and hardware/software requirements were identified as initial barriers. Clinicians also reported concerns about negative impact on workflow and the potential need for duplication during the transition from paper to electronic systems of medical record keeping. Conclusion: The following narrative report, based on observations obtained during the initial testing and use of a CDSS-D in clinical settings, further emphasizes the importance of taking organizational factors into account when planning implementation of evidence-based guidelines or decision support within a system.

    Understanding the implementation of evidence-based care: A structural network approach

    Background: Recent study of complex networks has yielded many new insights into phenomena such as social networks, the internet, and sexually transmitted infections. The purpose of this analysis is to examine the properties of a network created by the 'co-care' of patients within one region of the Veterans Health Affairs. Methods: Data were obtained for all outpatient visits from 1 October 2006 to 30 September 2008 within one large Veterans Integrated Service Network. Types of physician within each clinic were nodes connected by shared patients, with a weighted link representing the number of shared patients between each connected pair. Network metrics calculated included edge weights, node degree, node strength, node coreness, and node betweenness. Log-log plots were used to examine the distribution of these metrics. Sizes of k-core networks were also computed under multiple conditions of node removal. Results: There were 4,310,465 encounters by 266,710 shared patients between 722 provider types (nodes) across 41 stations or clinics, resulting in 34,390 edges. The number of other nodes to which primary care provider nodes have a connection (172.7) is 42% greater than that of general surgeons and two and one-half times as high as that of cardiologists. The log-log plot of the edge weight distribution appears to be linear in nature, revealing a 'scale-free' characteristic of the network, while the distributions of node degree and node strength are less so. The analysis of k-core network sizes under increasing removal of primary care nodes shows that approximately the 10 most connected primary care nodes play a critical role in keeping the k-core networks connected, because their removal disintegrates the highest k-core network. Conclusions: Delivery of healthcare in a large healthcare system such as that of the US Department of Veterans Affairs (VA) can be represented as a complex network. This network consists of highly connected provider nodes that serve as 'hubs' within the network, and demonstrates some 'scale-free' properties. By using currently available tools to explore its topology, we can explore how the underlying connectivity of such a system affects the behavior of providers, and perhaps leverage that understanding to improve the quality and outcomes of care.
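
    A minimal sketch of the kind of co-care network analysis described above, using the networkx library. The provider types and shared-patient counts here are invented placeholders, not VA data; the metric names (degree, strength, coreness, betweenness, k-core) match those listed in the abstract.

    # Hypothetical sketch of a weighted 'co-care' network between provider types.
    import networkx as nx

    G = nx.Graph()
    # (provider_type_a, provider_type_b, number_of_shared_patients) -- invented counts.
    G.add_weighted_edges_from([
        ("primary_care", "cardiology", 120),
        ("primary_care", "general_surgery", 85),
        ("primary_care", "mental_health", 60),
        ("cardiology", "general_surgery", 25),
        ("general_surgery", "mental_health", 10),
    ])

    degree = dict(G.degree())                   # number of connected provider types
    strength = dict(G.degree(weight="weight"))  # total shared patients per node
    coreness = nx.core_number(G)                # largest k-core each node belongs to
    betweenness = nx.betweenness_centrality(G)  # share of shortest paths through each node

    # Node-removal analysis: drop the primary care hub and see how the main
    # k-core shrinks, mirroring the k-core analysis in the abstract.
    main_core_size = len(nx.k_core(G))
    G_without_pc = G.copy()
    G_without_pc.remove_node("primary_care")
    print(degree, strength, coreness, betweenness)
    print(main_core_size, "->", len(nx.k_core(G_without_pc)))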

    Understanding organisational development, sustainability, and diffusion of innovations within hospitals participating in a multilevel quality collaborative

    Background: Between 2004 and 2008, 24 Dutch hospitals participated in a two-year multilevel quality collaborative (MQC) comprising (a) a leadership programme for hospital executives, (b) six quality-improvement collaboratives (QICs) for healthcare professionals and other staff, and (c) an internal programme organisation to help senior management monitor and coordinate team progress. The MQC aimed to stimulate the development of quality-management systems and the spread of methods to improve patient safety and logistics. The objective of this study is to describe how the first group of eight MQC hospitals sustained and disseminated the improvements made and the quality methods used. Methods: The approach followed by the hospitals was described using interview and questionnaire data gathered from eight programme coordinators. Results: MQC hospitals followed a systematic strategy of diffusion and sustainability. Hospital quality-management systems were further developed according to a model linking plan-do-study-act cycles at the unit and hospital level. The model involves quality norms based on realised successes, performance agreements with unit heads, organisational support, monitoring, and quarterly accountability reports. Conclusions: It is concluded from this study that the MQC contributed to organisational development and dissemination within participating hospitals. Organisational learning effects were demonstrated. System changes affect the context factors in the theory of organisational readiness: organisational culture, policies and procedures, past experience, organisational resources, and organisational structure. Programme coordinator responses indicate that these factors are utilised to manage spread and sustainability. Further research is needed to assess long-term effects.

    Short- and long-term effects of a quality improvement collaborative on diabetes management

    Introduction: This study examined the short- and long-term effects of a quality improvement collaborative on patient outcomes, professional performance, and structural aspects of chronic care management of type 2 diabetes in an integrated care setting. Methods: Controlled pre- and post-intervention study assessing patient outcomes (hemoglobin A1c, cholesterol, blood pressure, weight, blood lipid levels, and smoking status), professional performance (guideline adherence), and structural aspects of chronic care management from baseline up to 24 months. Analyses were based on 1,861 patients with diabetes in six intervention and nine control regions representing 37 general practices and 13 outpatient clinics. Results: Modest but significant improvement was seen in mean systolic blood pressure (a decrease of 4.0 mm Hg versus 1.6 mm Hg) and mean high-density lipoprotein levels (an increase of 0.12 versus 0.03 points) at two-year follow-up. Positive but non-significant differences were found in hemoglobin A1c (0.3%), cholesterol, and blood lipid levels. The intervention group showed significant improvement in the percentage of patients receiving advice and instruction to examine their feet, and smaller reductions in the percentage of patients receiving instruction to monitor blood glucose and visiting a dietician annually. Structural aspects of self-management and decision support also improved significantly. Conclusions: At a time of heightened national attention toward diabetes care, our results demonstrate a modest benefit of participation in a multi-institutional quality improvement collaborative focusing on integrated, patient-centered care. The effects persisted for at least 12 months after the intervention was completed. Trial number: http://clinicaltrials.gov Identifier: NCT00160017
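
    As a simple worked reading of the blood pressure result above, the sketch below computes a difference-in-differences style contrast from the reported changes (a 4.0 mm Hg decrease in intervention regions versus 1.6 mm Hg in control regions). This is illustrative arithmetic only; the study's actual analysis may have adjusted for baseline values and other covariates.

    # Illustrative arithmetic only: contrast the reported mean changes in
    # systolic blood pressure between intervention and control regions.
    intervention_change_mmhg = -4.0  # reported mean change, intervention regions
    control_change_mmhg = -1.6       # reported mean change, control regions

    # Difference-in-differences style estimate of the collaborative's effect.
    effect_mmhg = intervention_change_mmhg - control_change_mmhg
    print(f"Estimated additional SBP reduction: {effect_mmhg:.1f} mm Hg")  # -2.4 mm Hg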